Metric entropy limits on recurrent neural network learning of linear dynamical systems
Authors
Abstract
One of the most influential results in neural network theory is the universal approximation theorem [1], [2], [3], which states that continuous functions can be approximated to within arbitrary accuracy by single-hidden-layer feedforward networks. The purpose of this paper is to establish a result in this spirit for the learning of general discrete-time linear dynamical systems, including time-varying systems, by recurrent neural networks (RNNs). For the subclass of linear time-invariant (LTI) systems, we devise a quantitative version of this statement. Specifically, measuring the complexity of the considered class of LTI systems through metric entropy according to [4], we show that RNNs can optimally learn, or identify in system-theory parlance, stable LTI systems. For LTI systems whose input-output relation is characterized by a difference equation, this means that RNNs can learn the difference equation from input-output traces in a metric-entropy optimal manner.
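To make the learning task concrete, below is a minimal illustrative sketch, not taken from the paper: it simulates a stable second-order LTI system defined by a difference equation, records an input-output trace, and recovers the difference-equation coefficients (which play the role of the weights of a linear recurrent model) by least squares. The specific system, the coefficient values, and the use of plain least squares in place of the paper's RNN construction and metric-entropy analysis are assumptions made purely for illustration.

```python
# Illustrative sketch (not from the paper): identify a stable LTI system,
#   y[n] = a1*y[n-1] + a2*y[n-2] + b0*x[n] + b1*x[n-1],
# from an input-output trace by least squares on lagged samples,
# i.e. by fitting the weights of a linear recurrent model.
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth stable system: the poles are the roots of z^2 - 1.2 z + 0.52,
# which have magnitude sqrt(0.52) < 1.
a_true = np.array([1.2, -0.52])   # autoregressive (feedback) coefficients
b_true = np.array([0.5, 0.3])     # input (feedforward) coefficients

def simulate(x):
    """Run the difference equation on input sequence x (zero initial state)."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        ar = sum(a_true[k] * y[n - 1 - k] for k in range(2) if n - 1 - k >= 0)
        ma = sum(b_true[k] * x[n - k] for k in range(2) if n - k >= 0)
        y[n] = ar + ma
    return y

# Record an input-output trace.
x = rng.standard_normal(2000)
y = simulate(x)

# Regress y[n] on (y[n-1], y[n-2], x[n], x[n-1]); the solution recovers the
# difference-equation coefficients.
Phi = np.column_stack([
    np.concatenate([[0.0], y[:-1]]),        # y[n-1]
    np.concatenate([[0.0, 0.0], y[:-2]]),   # y[n-2]
    x,                                      # x[n]
    np.concatenate([[0.0], x[:-1]]),        # x[n-1]
])
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("estimated [a1, a2, b0, b1]:", np.round(theta, 3))
# -> close to [1.2, -0.52, 0.5, 0.3]
```

Because the chosen poles lie inside the unit circle, the example system is stable, matching the stability assumption under which the paper's optimality statement is made.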
Similar resources
Entropy operator for continuous dynamical systems of finite topological entropy
In this paper we introduce the concept of entropy operator for continuous systems of finite topological entropy. It is shown that it generates the Kolmogorov entropy as a special case. If $\phi$ is invertible then the entropy operator is bounded with the topological entropy of $\phi$ as its norm.
ENTROPY OF DYNAMICAL SYSTEMS ON WEIGHTS OF A GRAPH
Let $G$ be a finite simple graph whose vertices and edges are weighted by two functions. In this paper we shall define and calculate entropy of a dynamical system on weights of the graph $G$, by using the weights of vertices and edges of $G$. We examine the conditions under which entropy of the dynamical system is zero, positive or $+\infty$. At the end it is shown that, for $r\in [0,+\infty]$, t...
A Recurrent Neural Network Model for Solving Linear Semidefinite Programming
In this paper we solve a wide range of semidefinite programming (SDP) problems by using recurrent neural networks (RNNs). SDP is an important numerical tool for analysis and synthesis in systems and control theory. First we reformulate the problem as a linear programming problem; second we reformulate it as a first-order system of ordinary differential equations. Then a recurrent neural network...
Learning Stable Linear Dynamical Systems
Stability is a desirable characteristic for linear dynamical systems, but it is often ignored by algorithms that learn these systems from data. We propose a novel method for learning stable linear dynamical systems: we formulate an approximation of the problem as a convex program, start with a solution to a relaxed version of the program, and incrementally add constraints to improve stability. ...
A recurrent neural network for modelling dynamical systems.
We introduce a recurrent network architecture for modelling a general class of dynamical systems. The network is intended for modelling real-world processes in which empirical measurements of the external and state variables are obtained at discrete time points. The model can learn from multiple temporal patterns, which may evolve on different timescales and be sampled at non-uniform time inter...
Journal
Journal title: Applied and Computational Harmonic Analysis
Year: 2022
ISSN: 1096-603X, 1063-5203
DOI: https://doi.org/10.1016/j.acha.2021.12.004